parallel processing

Terms from Artificial Intelligence: humans at the heart of algorithms

Page numbers refer to the draft copy at present; they will be replaced with the correct numbers when the final book is formatted. Chapter numbers are correct and will not change now.

Parallel processing is when several computers, or computing elements within a single computer, run computations at the same time (in parallel). It may occur at a fairly large scale, such as across two separate computers or two cores in a single CPU. It may also occur at a fine scale, for example the way a GPU applies the same operation to many streams of data simultaneously. Parallel processing is often divided into two main types: SIMD (single instruction, multiple data), where the same thing happens to lots of data in parallel, as in the case of the GPU; and MIMD (multiple instruction, multiple data), where different code may run at the same time, as in the case of CPU cores.
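The SIMD/MIMD distinction above can be illustrated with a minimal Python sketch (not from the book; the function names are invented for illustration). The `map` call mimics the SIMD pattern, one operation applied uniformly to many data items, which a GPU would perform in hardware, while `ProcessPoolExecutor` runs two different functions at the same time on separate cores, the MIMD pattern.

```python
from concurrent.futures import ProcessPoolExecutor

# SIMD-style: the same operation applied to every data item.
# A GPU does this in hardware; map() only illustrates the pattern.
def double(x):
    return 2 * x

# MIMD-style: two unrelated pieces of code that can run at the
# same time on different CPU cores.
def sum_squares(n):
    return sum(i * i for i in range(n))

def count_vowels(text):
    return sum(text.count(v) for v in "aeiou")

if __name__ == "__main__":
    # One instruction (double), many data items.
    doubled = list(map(double, [1, 2, 3, 4]))

    # Different instructions, different data, running in parallel.
    with ProcessPoolExecutor(max_workers=2) as pool:
        a = pool.submit(sum_squares, 1000)
        b = pool.submit(count_vowels, "parallel processing")
        print(doubled, a.result(), b.result())
```

Note that `map` here runs sequentially; it stands in for the *shape* of SIMD computation, whereas the process pool genuinely executes the two tasks concurrently.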

Used in Chap. 6: pages 110, 126; Chap. 8: page 159; Chap. 12: pages 252, 263

Also known as parallelism